DeepHeritageNet: Modeling the Transmission of Intangible Musical Heritage

## Overview

DeepHeritageNet is a deep learning framework designed to address the challenges of preserving and transmitting intangible musical heritage. The model integrates several computational techniques to simulate the transmission of musical motifs across generations. By combining hierarchical architectures, cultural memory embeddings, and context-aware mechanisms, DeepHeritageNet supports both the fidelity and the creative evolution of musical traditions, making it a practical tool for cultural heritage preservation.

## Key Features

- **Cultural Embedding Transmission Network (CETNet):** A model that integrates symbolic and neural representations for cultural continuity and stylistic innovation in musical heritage transmission.
- **Context-Attuned Modulated Inheritance (CAMI):** A strategy for adapting transmission processes to dynamic cultural and performative contexts.
- **Multimodal Learning:** Incorporates diverse input types (e.g., audio, symbolic notation, ethnographic descriptions) for a comprehensive representation of musical heritage.
- **Generative Modeling:** Balances historical preservation with stylistic evolution, enabling the modeling of cultural diffusion and stylistic change.
- **Enhanced Accuracy:** Achieves stronger motif-recognition and stylistic-classification results than existing baselines, supporting high-quality preservation and revitalization of endangered musical traditions.

## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/DeepHeritageNet.git
cd DeepHeritageNet

# Install dependencies
pip install -r requirements.txt
```

## Usage

### Model Training

To train the model, use the following script:

```bash
python train.py --data_path /path/to/your/dataset --epochs 100 --batch_size 64
```

### Model Inference

Once trained, the model can generate new motifs or analyze cultural transmission:

```python
import torch
from deep_heritage_net import DeepHeritageNet

# Initialize model
model = DeepHeritageNet()

# Load trained weights
model.load_state_dict(torch.load('model.pth'))

# Generate new motif
new_motif = model.generate_motif(context_data)
```

## Datasets

The model supports multiple datasets, including:

- **Intangible Musical Heritage Audio Dataset:** A collection of audio recordings from various regional and ethnic traditions.
- **Traditional Music Style Classification Dataset:** Labeled audio samples for genre-level style classification.
- **Cultural Music Transmission Network Dataset:** Models the dissemination of musical practices and stylistic evolution across geographical and social networks.
- **Folk Song Feature Extraction Dataset:** Focused on computational analysis of folk music, including pitch contours, harmonic annotations, and lyrical content.

## Experimental Results

DeepHeritageNet has been evaluated on multiple cultural corpora:

- Intangible Musical Heritage Audio Dataset: 91.76% accuracy, 90.33% F1-score.
- Traditional Music Style Classification Dataset: 93.02% accuracy, 91.45% F1-score.
- Cultural Music Transmission Network Dataset: 91.29% accuracy, 93.04% AUC.
- Folk Song Feature Extraction Dataset: 92.58% accuracy, 93.89% AUC.

## Research & Development

DeepHeritageNet incorporates recent methods in deep learning, including:

- **Transformer-based Sequence Modeling:** Captures long-term dependencies in musical motifs.
- **Cultural Memory Embedding:** Preserves historical context while allowing for stylistic drift.
- **Contextual Dynamics and Modulation:** Adapts the model's output to cultural and performative contexts, keeping outputs authentic while allowing creative variation.

## Contributing

We welcome contributions to improve the system! To contribute:

1. Fork the repository.
2. Create a new branch.
3. Make your changes and submit a pull request.

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

We thank the ethnomusicological community and all researchers who contributed to the datasets used in this project.

## References

Zhang, W. et al. (2025). DeepHeritageNet: Applying Deep Learning Models in the Transmission Network of Intangible Musical Heritage. *Frontiers in Arts and Humanities*.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Define the Cultural Embedding Transmission Network (CETNet)
class CETNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(CETNet, self).__init__()
        
        # Define layers for encoding the motif and cultural memory
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.memory_embedding = nn.Linear(hidden_dim, hidden_dim)  # Cultural memory update
        self.decoder = nn.Linear(hidden_dim, output_dim)           # Output head (e.g., motif/style logits)

    def forward(self, motif_seq):
        # Forward pass (illustrative sketch): encode the motif sequence,
        # blend it with the cultural-memory embedding, then decode.
        _, (h_n, _) = self.encoder(motif_seq)            # h_n: (num_layers, batch, hidden_dim)
        h = h_n[-1]                                      # final-layer hidden state
        memory = torch.tanh(self.memory_embedding(h))    # cultural memory update
        return self.decoder(h + memory)
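
# Usage sketch for CETNet (dimensions below are illustrative assumptions,
# not values taken from the paper):
if __name__ == "__main__":
    model = CETNet(input_dim=64, hidden_dim=128, output_dim=32)
    motifs = torch.randn(8, 16, 64)     # batch of 8 motif sequences, 16 steps, 64-d features
    logits = model(motifs)              # -> shape (8, 32)
    print(logits.shape)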

CultureGraphNet

## Overview

CultureGraphNet is a graph-based structural attention learning framework designed to capture and analyze implicit cultural propagation across corporate networks. Traditional models often overlook the dynamic and heterogeneous nature of organizational interactions. CultureGraphNet addresses this by integrating structural attention mechanisms, graph-based learning, and temporal propagation dynamics, providing both predictive power and interpretability.

## 🧠 Key Components

1. **Structural Attention Graph Network (SAGN)** — a hierarchical attention-driven graph neural network that dynamically assigns importance to nodes and edges.
   - Learns latent cultural influences.
   - Captures both local and global dependencies in corporate structures.
   - Integrates multimodal encoders and graphical propagation layers.
2. **Adaptive Cultural Diffusion Strategy (ACDS)** — a tailored propagation strategy that simulates reinforcement, resistance, and decay of cultural traits.
   - Models time-dependent cultural evolution.
   - Adapts to changing corporate structures and relationships.
3. **Interpretability Framework** — provides insight into how cultural traits propagate by identifying influential nodes and relationships within the network.

## ⚙️ Architecture

The schematic diagram on page 6 of the paper illustrates the model's workflow:

- **Multimodal Encoder Architecture:** Extracts hierarchical features using a Swin Transformer backbone with patch merging and attention refinement.
- **Graphical Propagation Layer:** Aggregates node-level representations through attention-weighted adjacency matrices.
- **Structural Attention Mechanism** (page 10, Figure 4): Combines multimodal features from visual and textual inputs to learn hierarchical dependencies.
- **Temporal Cultural Dynamics:** Simulates the evolution of node-level cultural states over time using dynamic attention scores.

## 🧩 Model Formulation

CultureGraphNet models a corporate network as a directed graph:

- **Nodes (V):** corporate entities (employees, teams, departments)
- **Edges (E):** relationships or interactions
- **Node Features:** cultural attributes, contextual data
- **Adjacency Matrix (A):** influence weights among entities

Core equations:

```text
X(t + 1) = A · X(t) + F(X(t))                      # Cultural propagation
α_ij     = softmax(LeakyReLU(aᵀ [W₁hᵢ || W₂hⱼ]))   # Structural attention
h'_u     = σ(Σ_v α_uv W₃ h_v)                      # Node updates
cᵢ(t+1)  = φ(cᵢ(t), Σ_j α_ij cⱼ(t), zᵢ)            # Temporal dynamics
```

## 📊 Datasets

CultureGraphNet is evaluated on four major datasets:

| Dataset | Focus | Description |
|---------|-------|-------------|
| Corporate Network Interaction Dataset | Communication Patterns | Logs of emails, chats, and meetings capturing information flow. |
| Organizational Culture Propagation Dataset | Cultural Dynamics | Longitudinal survey and observational data on cultural norms. |
| Employee Relationship Graph Dataset | Social Structures | Mapping of formal and informal relationships within teams. |
| Workplace Structural Dynamics Dataset | Temporal Changes | Data on restructuring, turnover, and adaptability metrics. |

## 🧪 Experimental Results

According to Tables 1–4 (pages 12–13):

- Achieved up to 91.5% accuracy and 90.7% AUC across datasets.
- Outperformed baselines such as ResNet, ViT, DenseNet, and BLIP by 2–4%.
- Showed strong robustness, efficiency, and interpretability.
- Ablation studies confirm the critical impact of structural attention, temporal modeling, and interpretability features.

## ⚙️ Implementation Details

| Parameter | Value |
|-----------|-------|
| Framework | PyTorch |
| GPU | NVIDIA A100 (40 GB) |
| Optimizer | Adam (lr = 1e-3, cosine decay) |
| Batch Size | 64 |
| Epochs | 100 |
| Dropout | 0.5 |
| Weight Decay | 1e-4 |
| Data Augmentation | MixUp, CutMix, random crop, jitter |
| Metrics | Accuracy, Recall, F1, AUC, mAP |

## 🚀 How to Use

### Installation

```bash
git clone https://github.com/<your-username>/CultureGraphNet.git
cd CultureGraphNet
pip install -r requirements.txt
```

### Training

```bash
python train.py --config configs/culturegraphnet.yaml
```

### Evaluation

```bash
python evaluate.py --checkpoint checkpoints/best_model.pth
```

### Visualization

```bash
python visualize_attention.py --graph data/sample_graph.json
```

## 🧩 Folder Structure

```
├── data/        # Example datasets or loaders
├── models/      # CultureGraphNet model files
├── configs/     # Experiment configurations
├── utils/       # Helper scripts (metrics, visualization)
├── results/     # Logs and output metrics
└── README.md    # Project documentation
```

## 📚 Citation

If you use this framework, please cite:

```bibtex
@article{xu2025culturegraphnet,
  title={CultureGraphNet: Graph-Based Structural Attention Learning for Implicit Culture Propagation in Corporate Networks},
  author={Gaofan Xu},
  journal={China Three Gorges University},
  year={2025}
}
```

## 📜 License

This repository is licensed under the MIT License. See LICENSE for details.
# models/structural_attention.py
from __future__ import annotations

from typing import Optional, Tuple
import torch
from torch import nn, Tensor
import torch.nn.functional as F


class StructuralAttentionConv(nn.Module):
    r"""
    Structural Attention message passing (single head) inspired by the paper's
    formulations (attention over neighbors with separate W1/W2 and vector 'a').

    For a directed graph G=(V,E), we compute for edge u->v:

        e_uv     = LeakyReLU(a^T [W1 h_u || W2 h_v])
        alpha_uv = softmax_u(e_uv)                # normalized over in-neighbors u of v
        h'_v     = sigma( sum_u alpha_uv W3 h_u )

    Minimal dense-adjacency, single-head sketch (an assumption, not the paper's code).
    """
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, out_dim, bias=False)      # source transform
        self.w2 = nn.Linear(in_dim, out_dim, bias=False)      # target transform
        self.w3 = nn.Linear(in_dim, out_dim, bias=False)      # message transform
        self.a1 = nn.Parameter(torch.randn(out_dim) * 0.01)   # 'a' split into its
        self.a2 = nn.Parameter(torch.randn(out_dim) * 0.01)   # source/target halves

    def forward(self, h: Tensor, adj: Tensor) -> Tuple[Tensor, Tensor]:
        # h: (N, in_dim) node features; adj[u, v] = 1 if edge u -> v exists
        e = F.leaky_relu((self.w1(h) @ self.a1)[:, None] + (self.w2(h) @ self.a2)[None, :])
        e = e.masked_fill(adj == 0, float('-inf'))     # keep only existing edges
        alpha = torch.softmax(e, dim=0).nan_to_num()   # attention over sources u
        out = torch.sigmoid(alpha.T @ self.w3(h))      # h'_v = sigma(sum_u alpha_uv W3 h_u)
        return out, alpha
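
# Usage sketch (toy graph; sizes and edges are illustrative assumptions):
if __name__ == "__main__":
    h = torch.randn(4, 16)              # 4 nodes with 16-d features
    adj = torch.zeros(4, 4)
    adj[0, 1] = adj[1, 2] = adj[2, 3] = adj[3, 1] = 1.0   # directed edges u -> v
    layer = StructuralAttentionConv(in_dim=16, out_dim=8)
    h_new, alpha = layer(h, adj)        # h_new: (4, 8), alpha: (4, 4) attention weights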

M3DecideNet: Multi-Modal Attention-Driven Fusion for Enterprise Management Decision Support

## Overview

M3DecideNet is a multi-modal attention-driven fusion framework designed to enhance enterprise management decision-making by integrating diverse data sources such as financial metrics, operational statistics, and external market indicators. The system uses a dynamic attention mechanism to optimize predictive accuracy and interpretability in real-time decision-making.

## Key Features

- **Multi-Modal Fusion:** Integrates various data types (text, numerical, visual) using attention mechanisms.
- **Adaptive Decision Strategy:** A context-aware fusion strategy that adapts to different enterprise environments.
- **State-of-the-art Performance:** Empirical results show superior predictive performance across enterprise management scenarios.
- **Scalability & Flexibility:** Suitable for real-time business applications and adaptable to different enterprise contexts.

## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/M3DecideNet.git
cd M3DecideNet

# Install dependencies
pip install -r requirements.txt
```

## Usage

```python
from m3decidenet import M3DecideNet

# Initialize the model
model = M3DecideNet()

# Train the model with your enterprise data
model.train(training_data)

# Make predictions
predictions = model.predict(new_data)
```

## Documentation

### Components

- **Preliminaries:** Defines the mathematical foundation for enterprise decision-making, addressing multi-modal data fusion challenges.
- **M3FusionNet:** The core model that uses attention mechanisms for dynamic data integration.
- **Adaptive Decision Fusion Strategy (ADFS):** A robust strategy that optimizes decision-making through attention modulation and regularization.

For a deeper dive into the methodology, refer to the research paper linked in the References section.

## Experiments

The framework has been evaluated on multiple datasets, including Enterprise Decision Support Data, Multi-Modal Management Insights, and others. Performance metrics such as Accuracy, Precision, Recall, and AUC show that M3DecideNet outperforms leading models.

### Evaluation Results (Multi-Modal Management Insights Dataset)

- Accuracy: 90.45%
- Precision: 89.78%
- Recall: 89.23%
- AUC: 90.12%

## Contributing

We welcome contributions to improve and expand M3DecideNet. To contribute:

1. Fork the repository.
2. Create a new branch for your feature or fix.
3. Submit a pull request with a clear description of your changes.

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## References

- [Bi et al., 2022] Enterprise strategic management using multi-modal emotion recognition. *Frontiers in Psychology*.
- [Ren et al., 2024] Multi-modal fusion for review helpfulness prediction. *Information Processing & Management*.
- [Wang et al., 2023] Attentive statement fraud detection with multi-modal financial data. *Decision Support Systems*.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalAttention(nn.Module):
    def __init__(self, input_dims, attention_dims):
        super(MultiModalAttention, self).__init__()
        
        self.attention_layers = nn.ModuleList([
            nn.Linear(input_dim, attention_dims) for input_dim in input_dims
        ])
        self.attention_weights = nn.Parameter(torch.ones(len(input_dims)))

    def forward(self, inputs):
        # inputs: list of per-modality tensors, each of shape (batch, input_dims[i])
        # Project every modality into the shared attention space
        projected = [layer(x) for layer, x in zip(self.attention_layers, inputs)]

        # Modality-level attention scores, normalized across modalities
        # (illustrative fusion sketch, not the paper's full ADFS strategy)
        attention_scores = F.softmax(self.attention_weights, dim=0)

        # Attention-weighted sum of the projected modality embeddings
        fused = sum(w * feat for w, feat in zip(attention_scores, projected))
        return fused, attention_scores
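
# Usage sketch (made-up modality dimensions: text 300-d, numeric 32-d, visual 512-d):
if __name__ == "__main__":
    fusion = MultiModalAttention(input_dims=[300, 32, 512], attention_dims=128)
    text_feats, num_feats, vis_feats = torch.randn(16, 300), torch.randn(16, 32), torch.randn(16, 512)
    fused, weights = fusion([text_feats, num_feats, vis_feats])
    print(fused.shape, weights)         # torch.Size([16, 128]), softmax-normalized modality weights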

VIM Notes

### Commands / Description
Command | Description
---------|-----------
**Search** |
`/{search_term}` | searches forward for `{search_term}` in the document
`?{search_term}` | searches backward for `{search_term}` in the document
`CTRL-D` | scrolls the view down half a screen
**Substitute** | place cursor on the line
`:s/old/new/g` | replaces `old` with `new` on the current line; the `g` (global) flag applies to every match on the line, otherwise only the first
`:%s/old/new/gc` | `%` applies the substitution to the whole file; the `c` flag prompts for confirmation before each replacement
**Execute Terminal Commands** |
`:!{terminal command}` | runs `{terminal command}` in the shell without leaving vim

Multimodal Data Fusion for Evaluating the Effectiveness of Cadre Training

## Overview

This repository implements the research presented in "Multimodal Data Fusion for Evaluating the Effectiveness of Cadre Training." The project proposes a unified framework that integrates multimodal data—text, audio, video, and physiological signals—to assess the effectiveness of cadre (leadership) training. It emphasizes interpretability, adaptability, and policy alignment in performance evaluation.

## 🧠 Key Components

- **Hierarchically Attentive Progression Encoder (HAPE):** A temporal encoder that captures cross-time and cross-unit dependencies in performance data using hierarchical attention and GRU-based modeling. (See architecture diagram in Figure 1, page 8 of the paper.)
- **Policy-Aligned Knowledge-Guided Adaptation (PAKGA):** A reinforcement learning strategy that integrates institutional policies and domain knowledge into adaptive decision-making. (See schematic on page 11 for the workflow illustration.)
- **Multimodal Fusion Framework:** Combines heterogeneous data modalities (video, text, audio, sensor data) into interpretable embeddings, improving robustness and accuracy.

## 📊 Experimental Highlights

- Achieved ~91% accuracy and ~92% AUC on benchmark datasets:
  - Cadre Training Performance Dataset
  - Multimodal Leadership Assessment Dataset
  - Training Effectiveness Metrics Dataset
  - Behavioral Insights Fusion Dataset
- Outperforms baseline models such as OC-SVM, TranAD, and MSCRED by 4–6%.

## ⚙️ Implementation Details

- Frameworks: PyTorch, Hugging Face Transformers
- Optimizer: AdamW with cosine annealing
- Learning Rate: 3e-4 (with warmup and decay)
- Batch Size: 64 per GPU
- Evaluation Metrics: Accuracy, F1-Score, AUC, MSE, Pearson Correlation

## 🧩 Folder Structure

```
├── data/         # Sample datasets or data loading scripts
├── models/       # HAPE and PAKGA model definitions
├── utils/        # Helper functions (training, evaluation, visualization)
├── experiments/  # Configuration files and logs
├── figures/      # Architecture diagrams and result visualizations
└── README.md     # Project documentation
```

## 🚀 Usage

### Install Dependencies

```bash
pip install -r requirements.txt
```

### Train Model

```bash
python train.py --config configs/hape_pakga.yaml
```

### Evaluate

```bash
python evaluate.py --checkpoint checkpoints/best_model.pth
```

## 🧩 Citation

If you use this work, please cite:

```bibtex
@article{wang2025multimodal,
  title={Multimodal Data Fusion for Evaluating the Effectiveness of Cadre Training},
  author={Wang, Ke},
  journal={Shengli Oilfield Party School (Training Center)},
  year={2025}
}
```

## 📜 License

This project is released under the MIT License. See LICENSE for details.
# models/hape.py
from __future__ import annotations

import math
from typing import Optional, Tuple, Dict

import torch
from torch import nn, Tensor
import torch.nn.functional as F


class SinusoidalPositionalEncoding(nn.Module):
    """
    Classic transformer-style fixed positional encoding.

    Args:
        dim: feature dimension
        max_len: maximum sequence length supported
    """
    def __init__(self, dim: int, max_len: int = 10_000):
        super().__init__()
        pe = torch.zeros(max_len, dim)
        position = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, dim, 2, dtype=torch.float) * (-math.log(10000.0) / dim))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        # Register as a buffer so it moves with the module but is not trained
        self.register_buffer("pe", pe.unsqueeze(0))   # (1, max_len, dim)

    def forward(self, x: Tensor) -> Tensor:
        # x: (batch, seq_len, dim) -> add the positional encoding for the first seq_len steps
        return x + self.pe[:, : x.size(1)]
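
# Usage sketch (batch size, sequence length, and width are illustrative):
if __name__ == "__main__":
    pos_enc = SinusoidalPositionalEncoding(dim=64, max_len=512)
    x = torch.randn(8, 20, 64)          # 8 sequences, 20 time steps, 64-d features
    print(pos_enc(x).shape)             # torch.Size([8, 20, 64]) -- positions added element-wise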

Check Invalid Objects

DECLARE
    v_total_pkg     NUMBER := 0;
    v_success_pkg   NUMBER := 0;
    v_error_pkg     NUMBER := 0;
    v_total_prc     NUMBER := 0;
    v_success_prc   NUMBER := 0;
    v_error_prc     NUMBER := 0;
    v_total_fnc     NUMBER := 0;
    v_success_fnc   NUMBER := 0;
    v_error_fnc     NUMBER := 0;
    v_total_trg     NUMBER := 0;
    v_success_trg   NUMBER := 0;
    v_error_trg     NUMBER := 0;
BEGIN
    -- PACKAGES
    -- (assumed completion) count VALID vs INVALID objects per type
    FOR pkg IN (SELECT DISTINCT object_name, status
                  FROM user_objects
                 WHERE object_type = 'PACKAGE')
    LOOP
        v_total_pkg := v_total_pkg + 1;
        IF pkg.status = 'VALID' THEN
            v_success_pkg := v_success_pkg + 1;
        ELSE
            v_error_pkg := v_error_pkg + 1;
        END IF;
    END LOOP;

    -- PROCEDURE, FUNCTION and TRIGGER loops follow the same pattern,
    -- updating the v_*_prc, v_*_fnc and v_*_trg counters.

    DBMS_OUTPUT.PUT_LINE('Packages: ' || v_total_pkg || ' total, '
                         || v_success_pkg || ' valid, ' || v_error_pkg || ' invalid');
END;
/

shareX audio recording parameters

# shareX audio recording parameters

![](https://cdn.cacher.io/attachments/u/3fx93fy4dqwj6/wKxhyPsYdB5zhLVZ92KkCxnxrgaaM702/b2jobpocb.png)

A sample showing that frontend-only validation is not enough on its own

<form id="purchase-form">
  <label>
    個数(最大2個まで):
    <input type="number" id="quantity" name="quantity" min="1" max="2" required />
  </label>
  <button type="submit">購入</button>
</form>

<script>
  document.getElementById("purchase-form").addEventListener("submit", async (e) => {
    e.preventDefault();

    const quantity = parseInt(document.getElementById("quantity").value, 10);

    // フロント側バリデーション
    if (quantity > 2) {
      alert("2個までしか購入できません");
      return;
    }

    // APIに送信
  
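For contrast, a minimal server-side check sketch in Python (Flask and the `/api/purchase` endpoint are assumptions for illustration; the original sample does not specify a backend):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/purchase", methods=["POST"])
def purchase():
    quantity = int(request.get_json().get("quantity", 0))
    # Re-validate on the server: the browser-side check can be bypassed freely
    if not 1 <= quantity <= 2:
        return jsonify({"error": "You can only purchase up to 2 items"}), 400
    return jsonify({"ok": True, "quantity": quantity})
```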

php artisan tinker - add user

Open PHP Artisan Tinker:
```bash
php artisan tinker
```

```php
use App\Models\User;
User::create([
    'name' => 'test',
    'email' => 'test@testuser.com',
    'password' => bcrypt('0QcrNCKSQ7uuay2@'),
]);

```

🚧 Ingester

## Example URL to request
https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2025-01.parquet

## `__init__` parameters
BASE_URL = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata'
YEAR = '2025'
DATA_DIR = 🚧 define with a path relative to the project root

🚧 The data folder may also need to be created (in `__init__` or in a
separate script?)
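
A minimal sketch of what the ingester's `__init__` could look like given these notes (the class name, defaults, and the choice to create the folder inside `__init__` are assumptions; the 🚧 items above remain open):

```python
from pathlib import Path

class Ingester:
    BASE_URL = "https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata"

    def __init__(self, year: str = "2025", data_dir: str = "data"):
        self.year = year
        # Resolve data_dir relative to the project root (assumed here to be
        # the parent of this file's directory).
        self.data_dir = Path(__file__).resolve().parent.parent / data_dir
        # Create the data folder if it does not exist yet.
        self.data_dir.mkdir(parents=True, exist_ok=True)

    def url_for(self, month: int) -> str:
        # e.g. .../yellow_tripdata_2025-01.parquet
        return f"{self.BASE_URL}_{self.year}-{month:02d}.parquet"
```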

3350. Adjacent Increasing Subarrays Detection II

Given an array nums of n integers, your task is to find the maximum value of k for which there exist two adjacent subarrays of length k each, such that both subarrays are strictly increasing. Specifically, check if there are two subarrays of length k starting at indices a and b (a < b), where:

- Both subarrays nums[a..a + k - 1] and nums[b..b + k - 1] are strictly increasing.
- The subarrays must be adjacent, meaning b = a + k.

Return the maximum possible value of k. A subarray is a contiguous non-empty sequence of elements within an array.
/**
 * @param {number[]} nums
 * @return {number}
 */
var maxIncreasingSubarrays = function (nums) {
    let maxK = 0;       // Stores the maximum valid k found so far
    let curLen = 1;     // Length of the current strictly increasing run
    let prevLen = 0;    // Length of the previous strictly increasing run

    for (let i = 1; i < nums.length; i++) {
        if (nums[i] > nums[i - 1]) {
            // Continue the current increasing run
            curLen++;
        } else {
            // The increasing run ended: it becomes the "previous" run
            prevLen = curLen;
            curLen = 1;
        }

        // Two adjacent increasing windows of length k can come either from
        // splitting the current run in half, or from the tail of the previous
        // run followed by the head of the current run.
        maxK = Math.max(maxK, Math.floor(curLen / 2), Math.min(prevLen, curLen));
    }

    return maxK;
};

START_END

import pandas as pd
from datetime import datetime, timedelta
import yfinance as yf

# Define the end date
end = str(pd.Timestamp.today().strftime('%Y-%m-%d'))

# Calculate the start date (20 years before the end date)
no_years = 20
start = (datetime.strptime(end, '%Y-%m-%d') - timedelta(days=no_years*365)).strftime('%Y-%m-%d')

# Generate the date range
date_range = pd.date_range(start, end, freq='D')

print(date_range, '\n\n')

tickers = ['SPY', 'MDY']
data = yf.download(tickers, start=start, end=end)

print(data.head())

Decompose Flow

If the task has the **decomp** type

![](https://cdn.cacher.io/attachments/u/3kcbpjvt3jkry/PNj6DgUJ0EqxM_I_aaMwYKXgEuvfRa3G/wq1oqnqid.png)

- Create a new development task that contains the description of the work.
- Estimate the task yourself, then hand it to QA for estimation by moving it to the Requirements Review (RR) status.
  QA will then move it on to Ready to Develop.

### Linking
- Link the decomposition task to the development task as its child.
- Link the development task back to the decomposition task as its parent.
- The task

🛠️ Setting Up Pre-commit Hooks with UV

# Setting Up Pre-commit Hooks with UV

Pre-commit hooks automatically check your code before each commit, catching issues early and enforcing consistent code quality.

## Installation

Add pre-commit as a development dependency:

```bash
uv add --dev pre-commit
```

## Configuration

Create `.pre-commit-config.yaml` in your project root:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
```

git

# List remote repositories
git remote -v

# Disconnect (remove the remote)
git remote remove origin

cartopy

import os
import sys

import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.io.shapereader import Reader

sys.path.insert(0, "/data8/xuyf/Project/shouxian")
from configs import MAP_DIR
sheng = Reader(os.path.join(MAP_DIR, 'sheng.shp'))

BDY_DIR = "/data8/xuyf/Data/Static/boundary/GS(2024)0650-SHP"
sheng = Reader(os.path.join(BDY_DIR, 'sheng.shp'))

fig = plt.figure(figsize=(12, 8), dpi=300)
ax = fig.subplots(1, 1, subplot_kw={'projection': ccrs.PlateCarree()})

ax.add_feature(cfeature.COASTLINE)
# Overlay the province boundaries from the shapefile (assumed completion of the truncated snippet)
ax.add_geometries(sheng.geometries(), crs=ccrs.PlateCarree(),
                  facecolor='none', edgecolor='k', linewidth=0.6)